technological progress
AI will transform the 'human job' and enhance skills, says science minister
Patrick Vallance says robots would take away 'repetitive' tasks, but Sadiq Khan warns AI will usher in a 'new era of mass unemployment'. Advances in AI and robotics will transform human jobs, starting with roles in warehouses and factories, the UK science minister has said, as the government announced plans to reduce red tape for robot and defence tech companies. Patrick Vallance said technological progress was creating a "whole new area" for robots to work in. "What's really changing now is the combination of AI and robotics. It is opening up a whole new area, particularly in the sorts of things like humanoid robotics. And that will increase productivity, it will change the human job," he told the Guardian.
- Europe > United Kingdom (0.95)
- North America > United States (0.18)
- Europe > Ukraine (0.07)
- Oceania > Australia (0.05)
Can Artificial Intelligence Accelerate Technological Progress? Researchers' Perspectives on AI in Manufacturing and Materials Science
Nelson, John P., Olugbade, Olajide, Shapira, Philip, Biddle, Justin B.
Applications of artificial intelligence or machine learning in research:
- Modes of use: surrogate modeling for physics-based models; modeling of poorly understood phenomena; data preprocessing; large language model use
- AI/ML as research tool: production process design, monitoring, & output prediction; part design & properties prediction; materials design & properties prediction
- AI/ML as research product: generative AI design tool for consumers
- Generic research tasks: large language models for coding; large language models for literature review

Benefits of artificial intelligence or machine learning in research:
- Reduction in the accuracy/cost/speed trade-off in research, especially computer modeling: reduced computation time; replacing experimentation; reducing the need for computationally intensive, physics-based models; saving research labor; exploring larger design spaces
- Addressing previously unsolvable problems: modeling poorly understood relationships between variables; identifying patterns or phenomena humans cannot identify

Downsides of artificial intelligence or machine learning in research:
- Accuracy weaknesses: poor prediction outside regions of dense, high-quality training data
- Interpretability weaknesses: bounds of accuracy can be unclear; accuracy assessment can be difficult
- Long-run scientific progress concerns: AI/ML cannot develop novel scientific theory; AI/ML may bypass opportunities to identify empirical or theoretical novelties
- Resource issues: data acquisition and cleaning is time-intensive; AI/ML models are computation- and energy-intensive to develop
- Inappropriate use issues: easy to over-trust; may be inappropriately used to address problems soluble with simpler methods

Second, AI/ML models can be trained on input and output data for phenomena (e.g., complex production processes) which lack robust theoretical models, developing novel predictive capabilities in the absence of explicit, human-designed theory.
This is sometimes referred to as "phenomenological modeling," as it attempts to model phenomena in the absence of mechanistic, explanatory understanding: "[T]he first reason we choose to use AI is because we don't have a good model of what our system is. . . I get a bunch of data coming in and I have a bunch of sensor readings, you know. . . And I use the AI to map the bunch of sensor readings to the process health or process status or machine status that I have."
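The sensor-to-status mapping the interviewee describes can be sketched with a minimal nearest-centroid classifier. Everything below (the sensor features, labels, and values) is invented for illustration and stands in for whatever model and data a real deployment would use:

```python
from collections import defaultdict
from math import dist

def fit_centroids(readings, labels):
    """Average the sensor vectors observed under each machine status."""
    sums = defaultdict(list)
    counts = defaultdict(int)
    for x, y in zip(readings, labels):
        if not sums[y]:
            sums[y] = list(x)
        else:
            sums[y] = [a + b for a, b in zip(sums[y], x)]
        counts[y] += 1
    return {y: [v / counts[y] for v in s] for y, s in sums.items()}

def predict_status(centroids, x):
    """Map a new sensor reading to the status with the nearest centroid."""
    return min(centroids, key=lambda y: dist(centroids[y], x))

# Toy data: (temperature, vibration) readings labelled with machine status.
readings = [(60, 0.1), (62, 0.2), (95, 1.5), (98, 1.8)]
labels = ["healthy", "healthy", "faulty", "faulty"]
centroids = fit_centroids(readings, labels)
print(predict_status(centroids, (90, 1.4)))  # → faulty
```

No mechanistic model of the machine is involved: the mapping is learned purely from labelled input/output pairs, which is the point of the "phenomenological" framing.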
- North America > United States > Illinois > Cook County > Chicago (0.04)
- Europe > United Kingdom > England > Oxfordshire > Oxford (0.04)
- North America > United States > Virginia (0.04)
- (13 more...)
- Research Report > New Finding (1.00)
- Questionnaire & Opinion Survey (1.00)
- Personal > Interview (0.68)
- Research Report > Experimental Study (0.68)
From the telegraph to AI, our communications systems have always had hidden environmental costs
When we post to a group chat or talk to an AI chatbot, we don't think about how these technologies came to be. We take it for granted we can instantly communicate. We only notice the importance and reach of these systems when they're not accessible. Companies describe these systems with metaphors such as the "cloud" or "artificial intelligence", suggesting something intangible. But they are deeply material.
- Oceania > Australia (0.05)
- North America (0.05)
- Europe > United Kingdom (0.05)
- (5 more...)
- Energy (0.48)
- Information Technology > Services (0.32)
Preparing for the Intelligence Explosion
MacAskill, William, Moorhouse, Fin
AI that can accelerate research could drive a century of technological progress over just a few years. During such a period, new technological or political developments will raise consequential and hard-to-reverse decisions, in rapid succession. We call these developments grand challenges. These challenges include new weapons of mass destruction, AI-enabled autocracies, races to grab offworld resources, and digital beings worthy of moral consideration, as well as opportunities to dramatically improve quality of life and collective decision-making. We argue that these challenges cannot always be delegated to future AI systems, and suggest things we can do today to meaningfully improve our prospects. AGI preparedness is therefore not just about ensuring that advanced AI systems are aligned: we should be preparing, now, for the disorienting range of developments an intelligence explosion would bring.
- Europe > Russia (0.04)
- Asia > Russia (0.04)
- North America > United States > New Mexico > Los Alamos County > Los Alamos (0.04)
- (7 more...)
- Law (1.00)
- Information Technology > Security & Privacy (1.00)
- Health & Medicine > Therapeutic Area (1.00)
- (7 more...)
Tech billionaires are making a risky bet with humanity's future
While there's a sprawling patchwork of ideas and philosophies powering these visions, three features play a central role, says Adam Becker, a science writer and astrophysicist: an unshakable certainty that technology can solve any problem, a belief in the necessity of perpetual growth, and a quasi-religious obsession with transcending our physical and biological limits. In his timely new book, More Everything Forever: AI Overlords, Space Empires, and Silicon Valley's Crusade to Control the Fate of Humanity, Becker calls this triumvirate of beliefs the "ideology of technological salvation" and warns that tech titans are using it to steer humanity in a dangerous direction. "In most of these isms you'll find the idea of escape and transcendence, as well as the promise of an amazing future, full of unimaginable wonders--so long as we don't get in the way of technological progress." "The credence that tech billionaires give to these specific science-fictional futures validates their pursuit of more--to portray the growth of their businesses as a moral imperative, to reduce the complex problems of the world to simple questions of technology, [and] to justify nearly any action they might want to take," he writes. Becker argues that the only way to break free of these visions is to see them for what they are: a convenient excuse to continue destroying the environment, skirt regulations, amass more power and control, and dismiss the very real problems of today to focus on the imagined ones of tomorrow.
Where has the left's technological audacity gone? Leigh Phillips
Techno-optimism – the belief that technology will usher in a golden age for humanity – is in vogue once more. In 2022, a clutch of pseudonymous San Francisco artificial intelligence (AI) scenesters published a Substack post entitled "Effective Accelerationism", which argued for maximum acceleration of technological advancement. The 10-point manifesto, which proclaimed that "the next evolution of consciousness, creating unthinkable next-generation lifeforms and silicon-based awareness" was imminent, quickly went viral, as did follow-up posts. Effective accelerationism, or "e/acc", exploded from being a fringe movement dedicated to pushing back against AI extinction-fearing "doomers" to being namechecked by major Silicon Valley CEOs such as Garry Tan, the CEO of start-up accelerator Y Combinator; Sam Altman, head of OpenAI; Marc Andreessen, the billionaire software engineer; and Elon Musk. In 2023, Andreessen issued his Techno-Optimist Manifesto, expanding beyond the e/acc's focus on AI to encompass all questions of technological progress.
- North America > United States > California > San Francisco County > San Francisco (0.24)
- Asia > Japan > Honshū > Kansai > Kyoto Prefecture > Kyoto (0.04)
- Health & Medicine > Pharmaceuticals & Biotechnology (1.00)
- Health & Medicine > Therapeutic Area (1.00)
- Law (0.95)
- (2 more...)
Aligning Language Models with Offline Learning from Human Feedback
Hu, Jian, Tao, Li, Yang, June, Zhou, Chandler
Learning from human preferences is crucial for language models (LMs) to effectively cater to human needs and societal values. Previous research has made notable progress by leveraging human feedback to follow instructions. However, these approaches rely primarily on online learning techniques like Proximal Policy Optimization (PPO), which have been proven unstable and challenging to tune for language models. Moreover, PPO requires complex distributed system implementation, hindering the efficiency of large-scale distributed training. In this study, we propose an offline learning from human feedback framework to align LMs without interacting with environments. Specifically, we explore filtering alignment (FA), reward-weighted regression (RWR), and conditional alignment (CA) to align language models to human preferences. By employing a loss function similar to supervised fine-tuning, our methods ensure more stable model training than PPO with a simple machine learning system~(MLSys) and much fewer (around 9\%) computing resources. Experimental results demonstrate that conditional alignment outperforms other offline alignment methods and is comparable to PPO.
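The three offline methods the abstract names can be viewed as data-construction recipes feeding a supervised fine-tuning loss. A toy Python sketch of that view follows; the prompts, rewards, threshold, and control tokens are all illustrative assumptions, not taken from the paper:

```python
import math

# Toy data: (prompt, response, scalar reward from some reward model).
data = [
    ("greet", "Hello!", 0.9),
    ("greet", "Go away.", 0.1),
    ("thank", "You're welcome!", 0.8),
    ("thank", "Whatever.", 0.2),
]

def filtering_alignment(data, threshold=0.5):
    """FA: keep only high-reward responses, then fine-tune on them as usual."""
    return [(p, r) for p, r, rew in data if rew >= threshold]

def reward_weighted_regression(data, beta=1.0):
    """RWR: keep everything, but weight each example's SFT loss by exp(reward/beta)."""
    return [(p, r, math.exp(rew / beta)) for p, r, rew in data]

def conditional_alignment(data, threshold=0.5):
    """CA: condition each prompt on a reward token; sample with the good token at inference."""
    tag = lambda rew: "<good>" if rew >= threshold else "<bad>"
    return [(tag(rew) + p, r) for p, r, rew in data]

print(filtering_alignment(data))
```

Because each recipe only changes what examples (or weights) enter an ordinary cross-entropy objective, no environment interaction or PPO-style distributed rollout machinery is needed, which is the stability and efficiency argument the abstract makes.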
- North America > United States (0.04)
- Europe > United Kingdom (0.04)
- Europe > Italy > Calabria > Catanzaro Province > Catanzaro (0.04)
- (3 more...)
- Telecommunications (0.48)
- Banking & Finance > Economy (0.48)
- Information Technology (0.30)
Silicon Valley's Favorite Slogan Has Lost All Meaning
In early 2021, long before ChatGPT became a household name, OpenAI CEO Sam Altman self-published a manifesto of sorts, titled "Moore's Law for Everything." The original Moore's Law, formulated in 1965, describes the development of microchips, the tiny slivers of silicon that power your computer. More specifically, it predicted that the number of transistors that engineers could cram onto a chip would roughly double every year. As Altman sees it, something like that astonishing rate of progress will soon apply to housing, food, medicine, education--everything. The vision is nothing short of utopian.
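The compounding behind that claim is easy to make concrete. A toy sketch, using Moore's own rough 1965 figure of about 64 components per chip as an assumed starting point:

```python
def doublings(start, years):
    """Value after compounding a doubling every year for the given number of years."""
    return start * 2 ** years

# Roughly 64 transistors on a chip in 1965, doubling yearly for a decade:
print(doublings(64, 10))  # → 65536
```

Ten doublings multiply the starting point by 1,024, which is why applying this growth curve to housing, food, or medicine reads as utopian.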
- North America > United States > California (0.52)
- Europe > Netherlands > Limburg > Maastricht (0.05)
- Semiconductors & Electronics (1.00)
- Information Technology (1.00)
How can we help humans thrive trillions of years from now? This philosopher has a plan
Philosopher William MacAskill coined the term "longtermism" to convey the idea that humans have a moral responsibility to protect the future of humanity, prevent it from going extinct and create a better future for many generations to come. He outlines this concept in his new book, What We Owe the Future. Let's say you're hiking, and you drop a piece of glass on the trail.
- Europe > United Kingdom > England > Oxfordshire > Oxford (0.05)
- Europe > Sweden (0.05)
- Law (1.00)
- Government (0.70)
- Health & Medicine > Pharmaceuticals & Biotechnology (0.49)
The next (r)evolution: AI v human intelligence
Whenever I have had the displeasure of interacting with an obtuse online customer service bot or an automated phone service, I have come away with the conclusion that whatever "intelligence" I have just encountered was most certainly artificial and not particularly smart, and definitely not human. However, this likely would not have been the case with Google's experimental LaMDA (Language Model for Dialogue Applications). Recently, an engineer at the tech giant's Responsible AI organisation catapulted the chatbot to global headlines after claiming that he had reached the conclusion that it is not merely a highly sophisticated computer algorithm but possesses sentience – ie, the capacity to experience feelings and sensations. To prove his point, Blake Lemoine also published the transcript of conversations he and another colleague had with LaMDA. In response, the engineer has been suspended and put on paid leave for allegedly breaching Google's confidentiality policies.
- Health & Medicine (0.47)
- Law (0.47)